
📌 Retain class distribution for seed 1:
Class 0: 4500
Class 1: 4500
Class 2: 4500
Class 3: 4500
Class 4: 4500
Class 5: 4500
Class 6: 4500
Class 7: 4500
Class 8: 4500
Class 9: 4500

📌 Forget class distribution for seed 1:
Class 0: 500
Class 1: 500
Class 2: 500
Class 3: 500
Class 4: 500
Class 5: 500
Class 6: 500
Class 7: 500
Class 8: 500
Class 9: 500

📊 Updated class distribution:
Retain set:
  Class 0: 4750
  Class 1: 4750
  Class 2: 4750
  Class 3: 4750
  Class 4: 4750
  Class 5: 4750
  Class 6: 4750
  Class 7: 4750
  Class 8: 4750
  Class 9: 4750
Forget set:
  Class 0: 250
  Class 1: 250
  Class 2: 250
  Class 3: 250
  Class 4: 250
  Class 5: 250
  Class 6: 250
  Class 7: 250
  Class 8: 250
  Class 9: 250
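The counts above are consistent with a per-class, seeded subsampling of the CIFAR-10 training set: 10% of each class forgotten for the initial seed-1 split, then 5% (250 per class) after the update. A minimal sketch of such a stratified split follows, assuming a torchvision CIFAR-10 training set; the function name and `forget_frac` parameter are illustrative, not the actual implementation.

```python
import numpy as np
from torch.utils.data import Subset
from torchvision.datasets import CIFAR10

def split_retain_forget(dataset, forget_frac=0.05, seed=1):
    """Move forget_frac of each class into the forget set, seeded."""
    rng = np.random.default_rng(seed)
    targets = np.asarray(dataset.targets)
    forget_idx = []
    for c in np.unique(targets):
        class_idx = np.flatnonzero(targets == c)
        n_forget = int(len(class_idx) * forget_frac)  # 250 per class at 5%
        forget_idx.extend(rng.choice(class_idx, size=n_forget, replace=False))
    forget_idx = set(int(i) for i in forget_idx)
    retain_idx = [i for i in range(len(targets)) if i not in forget_idx]
    return Subset(dataset, retain_idx), Subset(dataset, sorted(forget_idx))

train_set = CIFAR10(root="./data", train=True, download=True)
retain_set, forget_set = split_retain_forget(train_set, forget_frac=0.05, seed=1)
print(len(retain_set), len(forget_set))  # 47500 2500, matching the log
```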
⚠️ Warning: Retain train loader may not be shuffled.
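The warning suggests the retain loader was built without per-epoch shuffling. A minimal fix sketch, assuming the `retain_set` subset from the split above; the batch size of 256 matches the step granularity in the log below:

```python
from torch.utils.data import DataLoader

retain_train_loader = DataLoader(
    retain_set,        # subset produced by the retain/forget split
    batch_size=256,    # matches the 256-sample steps in the training log
    shuffle=True,      # reshuffle each epoch so batches are not class-ordered
    num_workers=4,
    pin_memory=True,
)
```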
Training Epoch: 1 [256/47500]	Loss: 2.4117	LR: 0.000000
Training Epoch: 1 [512/47500]	Loss: 2.3577	LR: 0.000538
Training Epoch: 1 [768/47500]	Loss: 2.3666	LR: 0.001075
Training Epoch: 1 [1024/47500]	Loss: 2.4074	LR: 0.001613
Training Epoch: 1 [1280/47500]	Loss: 2.3039	LR: 0.002151
Training Epoch: 1 [1536/47500]	Loss: 2.2757	LR: 0.002688
Training Epoch: 1 [1792/47500]	Loss: 2.1893	LR: 0.003226
Training Epoch: 1 [2048/47500]	Loss: 2.2218	LR: 0.003763
Training Epoch: 1 [2304/47500]	Loss: 2.2361	LR: 0.004301
Training Epoch: 1 [2560/47500]	Loss: 2.3007	LR: 0.004839
Training Epoch: 1 [2816/47500]	Loss: 2.2469	LR: 0.005376
Training Epoch: 1 [3072/47500]	Loss: 2.0895	LR: 0.005914
Training Epoch: 1 [3328/47500]	Loss: 2.1867	LR: 0.006452
Training Epoch: 1 [3584/47500]	Loss: 2.0437	LR: 0.006989
Training Epoch: 1 [3840/47500]	Loss: 1.9784	LR: 0.007527
Training Epoch: 1 [4096/47500]	Loss: 1.9640	LR: 0.008065
Training Epoch: 1 [4352/47500]	Loss: 2.0614	LR: 0.008602
Training Epoch: 1 [4608/47500]	Loss: 1.9595	LR: 0.009140
Training Epoch: 1 [4864/47500]	Loss: 1.9228	LR: 0.009677
Training Epoch: 1 [5120/47500]	Loss: 1.9536	LR: 0.010215
Training Epoch: 1 [5376/47500]	Loss: 1.9546	LR: 0.010753
Training Epoch: 1 [5632/47500]	Loss: 1.7868	LR: 0.011290
Training Epoch: 1 [5888/47500]	Loss: 1.8162	LR: 0.011828
Training Epoch: 1 [6144/47500]	Loss: 1.7981	LR: 0.012366
Training Epoch: 1 [6400/47500]	Loss: 1.7174	LR: 0.012903
Training Epoch: 1 [6656/47500]	Loss: 1.7568	LR: 0.013441
Training Epoch: 1 [6912/47500]	Loss: 1.7854	LR: 0.013978
Training Epoch: 1 [7168/47500]	Loss: 1.8144	LR: 0.014516
Training Epoch: 1 [7424/47500]	Loss: 1.7287	LR: 0.015054
Training Epoch: 1 [7680/47500]	Loss: 1.8657	LR: 0.015591
Training Epoch: 1 [7936/47500]	Loss: 1.6661	LR: 0.016129
Training Epoch: 1 [8192/47500]	Loss: 1.7295	LR: 0.016667
Training Epoch: 1 [8448/47500]	Loss: 1.6661	LR: 0.017204
Training Epoch: 1 [8704/47500]	Loss: 1.6947	LR: 0.017742
Training Epoch: 1 [8960/47500]	Loss: 1.5557	LR: 0.018280
Training Epoch: 1 [9216/47500]	Loss: 1.5576	LR: 0.018817
Training Epoch: 1 [9472/47500]	Loss: 1.6122	LR: 0.019355
Training Epoch: 1 [9728/47500]	Loss: 1.6223	LR: 0.019892
Training Epoch: 1 [9984/47500]	Loss: 1.7385	LR: 0.020430
Training Epoch: 1 [10240/47500]	Loss: 1.6144	LR: 0.020968
Training Epoch: 1 [10496/47500]	Loss: 1.7931	LR: 0.021505
Training Epoch: 1 [10752/47500]	Loss: 1.6481	LR: 0.022043
Training Epoch: 1 [11008/47500]	Loss: 1.8000	LR: 0.022581
Training Epoch: 1 [11264/47500]	Loss: 1.7814	LR: 0.023118
Training Epoch: 1 [11520/47500]	Loss: 1.5535	LR: 0.023656
Training Epoch: 1 [11776/47500]	Loss: 1.6400	LR: 0.024194
Training Epoch: 1 [12032/47500]	Loss: 1.7927	LR: 0.024731
Training Epoch: 1 [12288/47500]	Loss: 1.7787	LR: 0.025269
Training Epoch: 1 [12544/47500]	Loss: 1.5005	LR: 0.025806
Training Epoch: 1 [12800/47500]	Loss: 1.7548	LR: 0.026344
Training Epoch: 1 [13056/47500]	Loss: 1.6601	LR: 0.026882
Training Epoch: 1 [13312/47500]	Loss: 1.5980	LR: 0.027419
Training Epoch: 1 [13568/47500]	Loss: 1.5276	LR: 0.027957
Training Epoch: 1 [13824/47500]	Loss: 1.6786	LR: 0.028495
Training Epoch: 1 [14080/47500]	Loss: 1.7857	LR: 0.029032
Training Epoch: 1 [14336/47500]	Loss: 1.7299	LR: 0.029570
Training Epoch: 1 [14592/47500]	Loss: 1.7086	LR: 0.030108
Training Epoch: 1 [14848/47500]	Loss: 1.4765	LR: 0.030645
Training Epoch: 1 [15104/47500]	Loss: 1.6427	LR: 0.031183
Training Epoch: 1 [15360/47500]	Loss: 1.4444	LR: 0.031720
Training Epoch: 1 [15616/47500]	Loss: 1.5649	LR: 0.032258
Training Epoch: 1 [15872/47500]	Loss: 1.6472	LR: 0.032796
Training Epoch: 1 [16128/47500]	Loss: 1.6169	LR: 0.033333
Training Epoch: 1 [16384/47500]	Loss: 1.4307	LR: 0.033871
Training Epoch: 1 [16640/47500]	Loss: 1.6108	LR: 0.034409
Training Epoch: 1 [16896/47500]	Loss: 1.5099	LR: 0.034946
Training Epoch: 1 [17152/47500]	Loss: 1.4482	LR: 0.035484
Training Epoch: 1 [17408/47500]	Loss: 1.4940	LR: 0.036022
Training Epoch: 1 [17664/47500]	Loss: 1.4621	LR: 0.036559
Training Epoch: 1 [17920/47500]	Loss: 1.5336	LR: 0.037097
Training Epoch: 1 [18176/47500]	Loss: 1.5816	LR: 0.037634
Training Epoch: 1 [18432/47500]	Loss: 1.3853	LR: 0.038172
Training Epoch: 1 [18688/47500]	Loss: 1.4840	LR: 0.038710
Training Epoch: 1 [18944/47500]	Loss: 1.4772	LR: 0.039247
Training Epoch: 1 [19200/47500]	Loss: 1.5130	LR: 0.039785
Training Epoch: 1 [19456/47500]	Loss: 1.4435	LR: 0.040323
Training Epoch: 1 [19712/47500]	Loss: 1.4369	LR: 0.040860
Training Epoch: 1 [19968/47500]	Loss: 1.3637	LR: 0.041398
Training Epoch: 1 [20224/47500]	Loss: 1.4164	LR: 0.041935
Training Epoch: 1 [20480/47500]	Loss: 1.5263	LR: 0.042473
Training Epoch: 1 [20736/47500]	Loss: 1.4234	LR: 0.043011
Training Epoch: 1 [20992/47500]	Loss: 1.3746	LR: 0.043548
Training Epoch: 1 [21248/47500]	Loss: 1.5336	LR: 0.044086
Training Epoch: 1 [21504/47500]	Loss: 1.4142	LR: 0.044624
Training Epoch: 1 [21760/47500]	Loss: 1.4625	LR: 0.045161
Training Epoch: 1 [22016/47500]	Loss: 1.5598	LR: 0.045699
Training Epoch: 1 [22272/47500]	Loss: 1.6346	LR: 0.046237
Training Epoch: 1 [22528/47500]	Loss: 1.6628	LR: 0.046774
Training Epoch: 1 [22784/47500]	Loss: 1.4738	LR: 0.047312
Training Epoch: 1 [23040/47500]	Loss: 1.3903	LR: 0.047849
Training Epoch: 1 [23296/47500]	Loss: 1.4172	LR: 0.048387
Training Epoch: 1 [23552/47500]	Loss: 1.6183	LR: 0.048925
Training Epoch: 1 [23808/47500]	Loss: 1.4210	LR: 0.049462
Training Epoch: 1 [24064/47500]	Loss: 1.5553	LR: 0.050000
Training Epoch: 1 [24320/47500]	Loss: 1.2472	LR: 0.050538
Training Epoch: 1 [24576/47500]	Loss: 1.5856	LR: 0.051075
Training Epoch: 1 [24832/47500]	Loss: 1.3336	LR: 0.051613
Training Epoch: 1 [25088/47500]	Loss: 1.4687	LR: 0.052151
Training Epoch: 1 [25344/47500]	Loss: 1.4029	LR: 0.052688
Training Epoch: 1 [25600/47500]	Loss: 1.3134	LR: 0.053226
Training Epoch: 1 [25856/47500]	Loss: 1.5934	LR: 0.053763
Training Epoch: 1 [26112/47500]	Loss: 1.5760	LR: 0.054301
Training Epoch: 1 [26368/47500]	Loss: 1.3169	LR: 0.054839
Training Epoch: 1 [26624/47500]	Loss: 1.4437	LR: 0.055376
Training Epoch: 1 [26880/47500]	Loss: 1.1867	LR: 0.055914
Training Epoch: 1 [27136/47500]	Loss: 1.7149	LR: 0.056452
Training Epoch: 1 [27392/47500]	Loss: 1.3290	LR: 0.056989
Training Epoch: 1 [27648/47500]	Loss: 1.5342	LR: 0.057527
Training Epoch: 1 [27904/47500]	Loss: 1.5118	LR: 0.058065
Training Epoch: 1 [28160/47500]	Loss: 1.2433	LR: 0.058602
Training Epoch: 1 [28416/47500]	Loss: 1.5266	LR: 0.059140
Training Epoch: 1 [28672/47500]	Loss: 1.5961	LR: 0.059677
Training Epoch: 1 [28928/47500]	Loss: 1.2773	LR: 0.060215
Training Epoch: 1 [29184/47500]	Loss: 1.3217	LR: 0.060753
Training Epoch: 1 [29440/47500]	Loss: 1.4651	LR: 0.061290
Training Epoch: 1 [29696/47500]	Loss: 1.3717	LR: 0.061828
Training Epoch: 1 [29952/47500]	Loss: 1.4582	LR: 0.062366
Training Epoch: 1 [30208/47500]	Loss: 1.3568	LR: 0.062903
Training Epoch: 1 [30464/47500]	Loss: 1.5796	LR: 0.063441
Training Epoch: 1 [30720/47500]	Loss: 1.3084	LR: 0.063978
Training Epoch: 1 [30976/47500]	Loss: 1.4052	LR: 0.064516
Training Epoch: 1 [31232/47500]	Loss: 1.3713	LR: 0.065054
Training Epoch: 1 [31488/47500]	Loss: 1.2695	LR: 0.065591
Training Epoch: 1 [31744/47500]	Loss: 1.3523	LR: 0.066129
Training Epoch: 1 [32000/47500]	Loss: 1.3059	LR: 0.066667
Training Epoch: 1 [32256/47500]	Loss: 1.3202	LR: 0.067204
Training Epoch: 1 [32512/47500]	Loss: 1.3357	LR: 0.067742
Training Epoch: 1 [32768/47500]	Loss: 1.3935	LR: 0.068280
Training Epoch: 1 [33024/47500]	Loss: 1.2854	LR: 0.068817
Training Epoch: 1 [33280/47500]	Loss: 1.2843	LR: 0.069355
Training Epoch: 1 [33536/47500]	Loss: 1.2914	LR: 0.069892
Training Epoch: 1 [33792/47500]	Loss: 1.4385	LR: 0.070430
Training Epoch: 1 [34048/47500]	Loss: 1.2765	LR: 0.070968
Training Epoch: 1 [34304/47500]	Loss: 1.5119	LR: 0.071505
Training Epoch: 1 [34560/47500]	Loss: 1.3266	LR: 0.072043
Training Epoch: 1 [34816/47500]	Loss: 1.3587	LR: 0.072581
Training Epoch: 1 [35072/47500]	Loss: 1.3722	LR: 0.073118
Training Epoch: 1 [35328/47500]	Loss: 1.2088	LR: 0.073656
Training Epoch: 1 [35584/47500]	Loss: 1.2057	LR: 0.074194
Training Epoch: 1 [35840/47500]	Loss: 1.3030	LR: 0.074731
Training Epoch: 1 [36096/47500]	Loss: 1.2346	LR: 0.075269
Training Epoch: 1 [36352/47500]	Loss: 1.2619	LR: 0.075806
Training Epoch: 1 [36608/47500]	Loss: 1.3602	LR: 0.076344
Training Epoch: 1 [36864/47500]	Loss: 1.2419	LR: 0.076882
Training Epoch: 1 [37120/47500]	Loss: 1.2943	LR: 0.077419
Training Epoch: 1 [37376/47500]	Loss: 1.2285	LR: 0.077957
Training Epoch: 1 [37632/47500]	Loss: 1.1984	LR: 0.078495
Training Epoch: 1 [37888/47500]	Loss: 1.2684	LR: 0.079032
Training Epoch: 1 [38144/47500]	Loss: 1.3003	LR: 0.079570
Training Epoch: 1 [38400/47500]	Loss: 1.2012	LR: 0.080108
Training Epoch: 1 [38656/47500]	Loss: 1.0697	LR: 0.080645
Training Epoch: 1 [38912/47500]	Loss: 1.3098	LR: 0.081183
Training Epoch: 1 [39168/47500]	Loss: 1.2895	LR: 0.081720
Training Epoch: 1 [39424/47500]	Loss: 1.1465	LR: 0.082258
Training Epoch: 1 [39680/47500]	Loss: 1.2294	LR: 0.082796
Training Epoch: 1 [39936/47500]	Loss: 1.2590	LR: 0.083333
Training Epoch: 1 [40192/47500]	Loss: 1.2246	LR: 0.083871
Training Epoch: 1 [40448/47500]	Loss: 1.2035	LR: 0.084409
Training Epoch: 1 [40704/47500]	Loss: 1.2441	LR: 0.084946
Training Epoch: 1 [40960/47500]	Loss: 1.1954	LR: 0.085484
Training Epoch: 1 [41216/47500]	Loss: 1.1969	LR: 0.086022
Training Epoch: 1 [41472/47500]	Loss: 1.3731	LR: 0.086559
Training Epoch: 1 [41728/47500]	Loss: 1.1399	LR: 0.087097
Training Epoch: 1 [41984/47500]	Loss: 1.3935	LR: 0.087634
Training Epoch: 1 [42240/47500]	Loss: 1.1990	LR: 0.088172
Training Epoch: 1 [42496/47500]	Loss: 1.2942	LR: 0.088710
Training Epoch: 1 [42752/47500]	Loss: 1.2377	LR: 0.089247
Training Epoch: 1 [43008/47500]	Loss: 1.2105	LR: 0.089785
Training Epoch: 1 [43264/47500]	Loss: 1.2223	LR: 0.090323
Training Epoch: 1 [43520/47500]	Loss: 1.3457	LR: 0.090860
Training Epoch: 1 [43776/47500]	Loss: 1.3050	LR: 0.091398
Training Epoch: 1 [44032/47500]	Loss: 1.2280	LR: 0.091935
Training Epoch: 1 [44288/47500]	Loss: 1.4652	LR: 0.092473
Training Epoch: 1 [44544/47500]	Loss: 1.2352	LR: 0.093011
Training Epoch: 1 [44800/47500]	Loss: 1.2343	LR: 0.093548
Training Epoch: 1 [45056/47500]	Loss: 1.1997	LR: 0.094086
Training Epoch: 1 [45312/47500]	Loss: 1.0635	LR: 0.094624
Training Epoch: 1 [45568/47500]	Loss: 1.2282	LR: 0.095161
Training Epoch: 1 [45824/47500]	Loss: 1.0844	LR: 0.095699
Training Epoch: 1 [46080/47500]	Loss: 1.1452	LR: 0.096237
Training Epoch: 1 [46336/47500]	Loss: 1.2830	LR: 0.096774
Training Epoch: 1 [46592/47500]	Loss: 1.0890	LR: 0.097312
Training Epoch: 1 [46848/47500]	Loss: 1.2960	LR: 0.097849
Training Epoch: 1 [47104/47500]	Loss: 1.1539	LR: 0.098387
Training Epoch: 1 [47360/47500]	Loss: 1.1131	LR: 0.098925
Training Epoch: 1 [47500/47500]	Loss: 1.2689	LR: 0.099462
Epoch 1 - Average Train Loss: 1.5199, Train Accuracy: 0.4499
Epoch 1 training time consumed: 19.66s
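The logged learning rate climbs linearly from 0 toward 0.1 in increments of about 0.000538 per 256-sample batch (0.1 / 186 batches), which is consistent with per-batch linear warmup over epoch 1. A sketch of such a schedule; the base LR of 0.1 and the one-epoch warmup length are inferred from the log, not confirmed:

```python
import torch
from torchvision.models import resnet18

model = resnet18(num_classes=10)  # stand-in; the run uses a CIFAR-style ResNet18
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)

batches_per_epoch = 186  # ceil(47500 / 256), per the log
warmup = torch.optim.lr_scheduler.LambdaLR(
    optimizer,
    lr_lambda=lambda step: min(1.0, step / batches_per_epoch),
)

for step in range(batches_per_epoch):
    # forward/backward and optimizer.step() on one batch would go here
    warmup.step()  # LR = 0.1 * step / 186, matching the logged ramp
```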
Evaluating Network.....
Test set: Epoch: 1, Average loss: 0.0047, Accuracy: 0.5866, Time consumed: 1.76s
Saving weights file to checkpoint/retrain/ResNet18/Saturday_02_August_2025_16h_04m_54s/ResNet18-Cifar10-seed1-ret50-1-best.pth
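The save path suggests best-checkpoint logic: a timestamped run directory whose `-best` file is overwritten whenever test accuracy improves. A hedged sketch; the directory layout is copied from the log line, but the surrounding logic and variable names are assumptions:

```python
import os
import time
import torch

run_dir = os.path.join("checkpoint", "retrain", "ResNet18",
                       time.strftime("%A_%d_%B_%Y_%Hh_%Mm_%Ss"))
os.makedirs(run_dir, exist_ok=True)

best_acc = 0.0
def maybe_save_best(model, acc, epoch):
    """Save a new best checkpoint whenever test accuracy improves."""
    global best_acc
    if acc > best_acc:
        best_acc = acc
        path = os.path.join(run_dir, f"ResNet18-Cifar10-seed1-ret50-{epoch}-best.pth")
        torch.save(model.state_dict(), path)
```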
Valid (Test) Dl:  10000
Train Dl:  50000
Retain Train Dl:  47500
Forget Train Dl:  2500
Retain Valid Dl:  47500
Forget Valid Dl:  2500
retain_prob Distribution: 10000 samples
test_prob Distribution: 10000 samples
forget_prob Distribution: 2500 samples
Set1 Distribution: 2500 samples
Set2 Distribution: 2500 samples
Set1 Distribution: 2500 samples
Set2 Distribution: 2500 samples
Set1 Distribution: 10000 samples
Set2 Distribution: 10000 samples
Set1 Distribution: 10000 samples
Set2 Distribution: 10000 samples
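The "Distribution" lines above appear to log per-sample softmax output sets collected for the ZRF and pairwise MIA comparisons (2,500-sample pairs for the forget-set comparisons, 10,000-sample pairs for the test-set ones). A hedged sketch of gathering such probability distributions; the model and loader names are assumptions:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def collect_probs(model, loader, device="cpu"):
    """Return a (num_samples, num_classes) tensor of softmax outputs."""
    model.eval()
    probs = []
    for x, _ in loader:
        probs.append(F.softmax(model(x.to(device)), dim=1).cpu())
    return torch.cat(probs)

# e.g. forget_prob = collect_probs(model, forget_train_loader)  # 2500 samples
```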
Test Accuracy: 58.90%
Retain Accuracy: 60.10%
Zero Retrain Forgetting (ZRF): 0.9217
Membership Inference Attack (MIA): 0.4452
Forget vs Retain MIA: 0.4860
Forget vs Test MIA: 0.4730
Test vs Retain MIA: 0.5298
Train vs Test MIA: 0.5063
Forget Set Accuracy (Df): 59.77%
Method Execution Time: 906.95 seconds
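The "X vs Y MIA" scores above typically report the accuracy of an attack classifier trained to separate per-sample losses from the two sets; 0.5 means the attacker cannot tell them apart, which is the ideal for Forget vs Test after unlearning. A hedged sketch of such a loss-based attack; the exact attack used in this run is an assumption, not confirmed by the log:

```python
import numpy as np
import torch
import torch.nn.functional as F
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

@torch.no_grad()
def per_sample_losses(model, loader, device="cpu"):
    """Cross-entropy loss for each sample, without reduction."""
    model.eval()
    losses = []
    for x, y in loader:
        logits = model(x.to(device))
        losses.append(F.cross_entropy(logits, y.to(device), reduction="none").cpu())
    return torch.cat(losses).numpy()

def mia_score(losses_members, losses_nonmembers):
    """Cross-validated accuracy of a logistic attack on per-sample losses."""
    X = np.concatenate([losses_members, losses_nonmembers]).reshape(-1, 1)
    y = np.concatenate([np.ones_like(losses_members),
                        np.zeros_like(losses_nonmembers)])
    return cross_val_score(LogisticRegression(), X, y,
                           cv=5, scoring="accuracy").mean()

# e.g. mia_score(per_sample_losses(model, forget_loader),
#                per_sample_losses(model, test_loader))  # ≈0.5 ⇒ indistinguishable
```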
